Results 1 - 20 of 16,189
1.
Elife ; 12, 2024 May 13.
Article in English | MEDLINE | ID: mdl-38739437

ABSTRACT

In several large-scale replication projects, statistically non-significant results in both the original and the replication study have been interpreted as a 'replication success.' Here, we discuss the logical problems with this approach: Non-significance in both studies does not ensure that the studies provide evidence for the absence of an effect and 'replication success' can virtually always be achieved if the sample sizes are small enough. In addition, the relevant error rates are not controlled. We show how methods, such as equivalence testing and Bayes factors, can be used to adequately quantify the evidence for the absence of an effect and how they can be applied in the replication setting. Using data from the Reproducibility Project: Cancer Biology, the Experimental Philosophy Replicability Project, and the Reproducibility Project: Psychology we illustrate that many original and replication studies with 'null results' are in fact inconclusive. We conclude that it is important to also replicate studies with statistically non-significant results, but that they should be designed, analyzed, and interpreted appropriately.
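As an illustration of the equivalence-testing approach advocated here, the following minimal Python sketch applies the two one-sided tests (TOST) procedure to a hypothetical non-significant replication result; the margin, summary statistics, and degrees of freedom are invented for illustration and are not taken from the paper.

```python
# A minimal sketch of a two-one-sided-tests (TOST) equivalence analysis for a
# "null result", one way to quantify evidence for the absence of an effect.
# All inputs (margin, difference, SE, df) are hypothetical illustrations.
from scipy import stats

def tost_mean_diff(diff, se, df, delta):
    """TOST for H0: |true difference| >= delta vs H1: |true difference| < delta.

    diff  : observed mean difference
    se    : standard error of the difference
    df    : degrees of freedom
    delta : equivalence margin
    Returns the TOST p-value (the larger of the two one-sided p-values).
    """
    t_lower = (diff + delta) / se            # test against -delta
    t_upper = (diff - delta) / se            # test against +delta
    p_lower = 1 - stats.t.cdf(t_lower, df)   # H0: diff <= -delta
    p_upper = stats.t.cdf(t_upper, df)       # H0: diff >= +delta
    return max(p_lower, p_upper)

# Hypothetical replication with a non-significant difference:
p = tost_mean_diff(diff=0.10, se=0.15, df=58, delta=0.5)
print(f"TOST p-value: {p:.3f}")  # p < 0.05 would support equivalence within +/-0.5
```

A non-significant TOST p-value would mark the result as inconclusive rather than as evidence of absence, which is the distinction the abstract draws.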


Subject(s)
Bayes Theorem , Reproducibility of Results , Humans , Research Design , Sample Size , Data Interpretation, Statistical
2.
Croat Med J ; 65(2): 122-137, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38706238

ABSTRACT

AIM: To compare the effectiveness of artificial neural network (ANN) analysis and traditional statistical analysis on identical data sets within the splenectomy-middle cerebral artery occlusion (MCAO) mouse model. METHODS: Mice were divided into the splenectomized (SPLX) and sham-operated (SPLX-sham) groups. A splenectomy was conducted 14 days before MCAO. Magnetic resonance imaging (MRI), bioluminescent imaging, neurological scoring (NS), and histological analysis were conducted at two, four, seven, and 28 days after MCAO. Frequentist statistical analyses and ANN analysis employing a multi-layer perceptron architecture were performed to assess the probability of discriminating between SPLX and SPLX-sham mice. RESULTS: Repeated-measures ANOVA showed no significant differences between the SPLX and SPLX-sham groups post-MCAO in body weight (F(5, 45)=0.696, P=0.629), NS (F(2.024, 18.218)=1.032, P=0.377), or brain infarct size on MRI (F(2, 24)=0.267, P=0.768). ANN analysis was employed to predict the SPLX and SPLX-sham classes. The highest accuracy in predicting the SPLX class was observed when the model was trained on a data set containing all variables (0.7736±0.0234). For the SPLX-sham class, the highest accuracy was achieved when the model was trained on a data set excluding the variable combination MR contralateral/animal mass/NS (0.9284±0.0366). CONCLUSION: This study validated the neuroprotective impact of splenectomy in an MCAO model using ANN for data analysis with a reduced animal sample size, demonstrating the potential of advanced statistical methods to minimize sample sizes in experimental biomedical research.
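For readers unfamiliar with the setup, the hedged Python sketch below shows the general pattern of the analysis described (a multi-layer perceptron classifier evaluated by cross-validated accuracy); the features, network size, and data are invented placeholders, not the study's pipeline or data.

```python
# A minimal sketch: train an MLP to discriminate two experimental groups from
# tabular features and report cross-validated accuracy (mean +/- SD). All
# feature names, values, and hyperparameters are hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_per_group = 40
# Hypothetical features (e.g., infarct volume, body mass, neurological score).
X_splx = rng.normal([1.0, 25.0, 2.0], [0.3, 2.0, 1.0], size=(n_per_group, 3))
X_sham = rng.normal([1.3, 25.5, 2.8], [0.3, 2.0, 1.0], size=(n_per_group, 3))
X = np.vstack([X_splx, X_sham])
y = np.array([0] * n_per_group + [1] * n_per_group)  # 0 = SPLX, 1 = SPLX-sham

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                  random_state=0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```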


Subject(s)
Disease Models, Animal , Infarction, Middle Cerebral Artery , Magnetic Resonance Imaging , Neural Networks, Computer , Splenectomy , Animals , Mice , Splenectomy/methods , Infarction, Middle Cerebral Artery/surgery , Infarction, Middle Cerebral Artery/diagnostic imaging , Sample Size , Male
3.
Trials ; 25(1): 312, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38725072

ABSTRACT

BACKGROUND: Clinical trials often involve some form of interim monitoring to determine futility before planned trial completion. While many options for interim monitoring exist (e.g., alpha-spending, conditional power), nonparametric interim monitoring methods are also needed to accommodate more complex trial designs and analyses. The upstrap is one recently proposed nonparametric method that may be applied for interim monitoring. METHODS: Upstrapping is motivated by the case-resampling bootstrap and involves repeatedly sampling with replacement from the interim data to simulate thousands of fully enrolled trials. The p-value is calculated for each upstrapped trial, and the proportion of upstrapped trials for which the p-value criteria are met is compared with a pre-specified decision threshold. To evaluate the potential utility of upstrapping as a form of interim futility monitoring, we conducted a simulation study considering different sample sizes with several proposed calibration strategies for the upstrap. We first compared trial rejection rates across a selection of threshold combinations to validate the upstrapping method. Then, we applied upstrapping methods to simulated clinical trial data, directly comparing their performance with more traditional alpha-spending and conditional power interim monitoring methods for futility. RESULTS: The method validation demonstrated that upstrapping is much more likely to find evidence of futility in the null scenario than in the alternative across a variety of simulation settings. Our three proposed approaches for calibration of the upstrap had different strengths depending on the stopping rules used. Compared to O'Brien-Fleming group sequential methods, upstrapped approaches had type I error rates that differed by at most 1.7%, and expected sample size was 2-22% lower in the null scenario, while in the alternative scenario power fluctuated between 15.7% lower and 0.2% higher and expected sample size was 0-15% lower. CONCLUSIONS: In this proof-of-concept simulation study, we evaluated the potential of upstrapping as a resampling-based method for futility monitoring in clinical trials. The trade-offs in expected sample size, power, and type I error rate control indicate that the upstrap can be calibrated to implement futility monitoring with varying degrees of aggressiveness, with performance comparable to the considered alpha-spending and conditional power futility monitoring methods.
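The upstrap procedure described above is straightforward to prototype. Below is a minimal Python sketch under assumed specifics (a two-sample t-test endpoint, alpha = 0.05, and a 20% futility threshold); the paper's calibrated thresholds and trial designs are not reproduced here.

```python
# A minimal upstrap sketch: resample interim data with replacement up to the
# planned full sample size, compute the final-analysis p-value per resampled
# trial, and stop for futility if too few resampled trials are significant.
# Endpoint, alpha, and the futility threshold are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

def upstrap_futility(x_interim, y_interim, n_full_per_arm,
                     n_upstrap=5000, alpha=0.05, futility_threshold=0.20):
    """Return (proportion of upstrapped trials with p < alpha, stop flag)."""
    successes = 0
    for _ in range(n_upstrap):
        x_full = rng.choice(x_interim, size=n_full_per_arm, replace=True)
        y_full = rng.choice(y_interim, size=n_full_per_arm, replace=True)
        _, p = stats.ttest_ind(x_full, y_full)
        successes += p < alpha
    prop = successes / n_upstrap
    return prop, prop < futility_threshold

# Hypothetical interim data: 50 of a planned 100 patients per arm.
x = rng.normal(0.0, 1.0, 50)   # control arm
y = rng.normal(0.1, 1.0, 50)   # treatment arm with a small true effect
prop, stop = upstrap_futility(x, y, n_full_per_arm=100)
print(f"Proportion of upstrapped trials significant: {prop:.3f}; stop = {stop}")
```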


Subject(s)
Clinical Trials as Topic , Computer Simulation , Medical Futility , Research Design , Humans , Clinical Trials as Topic/methods , Sample Size , Data Interpretation, Statistical , Models, Statistical , Treatment Outcome
4.
BMC Med Res Methodol ; 24(1): 110, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38714936

ABSTRACT

Bayesian statistics plays a pivotal role in advancing medical science by enabling healthcare companies, regulators, and stakeholders to assess the safety and efficacy of new treatments, interventions, and medical procedures. The Bayesian framework offers a unique advantage over the classical framework, especially when a new trial can incorporate prior information from high-quality external data, such as historical data or other sources of co-data. In recent years, there has been a significant increase in regulatory submissions using Bayesian statistics due to its flexibility and ability to provide valuable insights for decision-making, addressing the modern complexity of clinical trials for which frequentist methods are inadequate. For regulatory submissions, companies often need to consider the frequentist operating characteristics of the Bayesian analysis strategy, regardless of the design complexity. In particular, the focus is on the frequentist type I error rate and power for all realistic alternatives. This tutorial review aims to provide a comprehensive overview of the use of Bayesian statistics in sample size determination, control of the type I error rate, multiplicity adjustments, external data borrowing, and related topics in the regulatory environment of clinical trials. Fundamental concepts of Bayesian sample size determination and illustrative examples are provided to serve as a valuable resource for researchers, clinicians, and statisticians seeking to develop more complex and innovative designs.
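As a small illustration of checking the frequentist operating characteristics of a Bayesian analysis, the sketch below simulates the type I error rate of a posterior-probability decision rule for a single-arm binary endpoint; the sample size, reference rate, Beta(1, 1) prior, and 0.975 cutoff are illustrative assumptions, not recommendations from the tutorial.

```python
# A minimal sketch: estimate the frequentist type I error of a Bayesian
# success rule by simulating trials under the null. All design values are
# hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n, p0, cutoff = 60, 0.30, 0.975
a, b = 1.0, 1.0                  # Beta(1, 1) prior on the response rate

# Simulate many single-arm trials with true rate = p0. For each, the posterior
# is Beta(a + x, b + n - x); the trial declares success when
# Pr(rate > p0 | data) exceeds the cutoff.
x_null = rng.binomial(n, p0, size=100_000)
post_prob = 1 - stats.beta.cdf(p0, a + x_null, b + n - x_null)
type1 = np.mean(post_prob > cutoff)
print(f"Estimated frequentist type I error of the Bayesian rule: {type1:.4f}")
```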


Subject(s)
Bayes Theorem , Clinical Trials as Topic , Humans , Clinical Trials as Topic/methods , Clinical Trials as Topic/statistics & numerical data , Research Design/standards , Sample Size , Data Interpretation, Statistical , Models, Statistical
5.
Am J Physiol Heart Circ Physiol ; 326(6): H1420-H1423, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38700473

ABSTRACT

The use of both sexes or genders should be considered in experimental design, analysis, and reporting. Since there is no requirement to double the sample size or to have sufficient power to study sex differences, challenges for the statistical analysis can arise. In this article, we focus on the topics of statistical power and ways to increase this power. We also discuss the choice of an appropriate design and statistical method and include a separate section on equivalence tests needed to show the absence of a relevant difference.
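One way to see the power challenge is a back-of-the-envelope comparison, sketched below under a normal approximation: with the same cell sizes, a sex-by-treatment interaction contrast has twice the standard error of the pooled treatment contrast, so an interaction of the same magnitude needs roughly four times the sample size. All numbers are illustrative assumptions, not values from the article.

```python
# A minimal sketch comparing power for a pooled main effect versus a
# sex-by-treatment interaction of the same size in a 2x2 design.
import numpy as np
from scipy.stats import norm

def power_z(effect_se_ratio, alpha=0.05):
    """Two-sided z-test power given the ratio (true effect / standard error)."""
    z = norm.ppf(1 - alpha / 2)
    return norm.sf(z - effect_se_ratio) + norm.cdf(-z - effect_se_ratio)

n_per_cell, sd, d = 25, 1.0, 0.5         # hypothetical 2x2 design inputs
se_main = sd / np.sqrt(n_per_cell)       # treatment contrast, sexes pooled
se_inter = 2 * sd / np.sqrt(n_per_cell)  # sex-by-treatment interaction contrast
print(f"Power for a main effect of size {d}:        {power_z(d / se_main):.2f}")
print(f"Power for an interaction of the same size: {power_z(d / se_inter):.2f}")
# The interaction SE is twice the main-effect SE, so matching power for an
# interaction of the same magnitude needs roughly four times the sample size.
```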


Subject(s)
Research Design , Humans , Data Interpretation, Statistical , Sample Size , Female , Male , Animals , Sex Factors , Models, Statistical
8.
Expert Rev Vaccines ; 23(1): 523-534, 2024.
Article in English | MEDLINE | ID: mdl-38682812

ABSTRACT

BACKGROUND: Traditional vaccine development is often a lengthy and costly process comprising three separate phases. However, the swift development of COVID-19 vaccines highlighted the critical importance of accelerating vaccine approval. This article showcases a seamless phase 2/3 trial design to expedite the development process, particularly for multi-valent vaccines. RESEARCH DESIGN AND METHODS: This study uses simulation to compare the performance of the seamless phase 2/3 design with that of a conventional trial design, specifically by re-envisioning a 9-valent HPV vaccine trial. Across three cases, several key performance metrics are evaluated: overall power, type I error rate, average sample size, trial duration, the percentage of early stopping, and the accuracy of dose selection. RESULTS: On average, when the experimental vaccine was assumed to be effective, the seamless design that performed interim analyses based solely on efficacy saved 555.73 subjects, shortened trials by 10.29 months, and increased power by 3.70%. When the experimental vaccine was less effective than the control, it saved an average of 887.73 subjects while maintaining the type I error rate below 0.025. CONCLUSION: The seamless design proves to be a compelling strategy for vaccine development, given its versatility in early stopping, re-estimating sample sizes, and shortening trial durations.


Subject(s)
COVID-19 Vaccines , COVID-19 , Clinical Trials, Phase II as Topic , Clinical Trials, Phase III as Topic , Research Design , Vaccine Development , Humans , COVID-19 Vaccines/administration & dosage , COVID-19 Vaccines/immunology , COVID-19/prevention & control , Vaccine Development/methods , Sample Size , Papillomavirus Vaccines/administration & dosage , Papillomavirus Vaccines/immunology , Computer Simulation
9.
Zhongguo Yi Xue Ke Xue Yuan Xue Bao ; 46(2): 225-231, 2024 Apr.
Article in Chinese | MEDLINE | ID: mdl-38686719

ABSTRACT

Objective To develop and verify sample size formulas for quantitative data consistency evaluation based on the least-squares regression method. Methods According to the principles of least-squares regression-based quantitative consistency evaluation, statistical inference, and formula derivation, we developed formulas for calculating sample size based on the regression constant and the regression coefficient. The accuracy of the formulas was verified with data from three examples, and the results were compared with those of the sample size formula established based on the Bland-Altman (BA) method. Results The sample size formulas for regression-based quantitative consistency evaluation were derived, and their accuracy was verified with the three examples. The results obtained with these formulas differed from those of the sample size formula based on the BA method. With the sample size calculated by the regression method, regression analysis and BA analysis reached consistent conclusions; however, with the sample size calculated by the BA method, the consistency conclusions of regression analysis and BA analysis sometimes disagreed. Conclusion A sample size formula for quantitative consistency evaluation based on the regression method is proposed for the first time, providing methodological support for research in this field.


Subject(s)
Sample Size , Least-Squares Analysis , Regression Analysis
10.
Pharmaceut Med ; 38(3): 225-239, 2024 May.
Article in English | MEDLINE | ID: mdl-38684588

ABSTRACT

BACKGROUND: The Japanese biosimilar guideline requires that sponsors conduct clinical studies such as comparative pharmacokinetic (PK), pharmacodynamic (PD), or efficacy studies. In each biosimilar development program, the sponsor designs the clinical data package, and thus clinical data packages vary among developments. OBJECTIVES: The aim of this study was to elucidate the clinical data packages for the biosimilars approved in Japan. The details of the clinical data packages and sample sizes for the regulatory approvals of biosimilars in Japan are reported. METHODS: We surveyed the clinical data packages and sample sizes based on the Pharmaceuticals and Medical Devices Agency (PMDA) review reports published on its website between 2009 and 2023. RESULTS: Twenty-four biosimilars have been approved based on comparative PK and efficacy studies, 10 biosimilars based on a comparative PK/PD study, and one biosimilar based on a comparative efficacy study. Regarding sample size, comparative PK studies were conducted in healthy volunteers or patients with up to 300 participants, although the majority enrolled only 1-100 (68.1%, 32/47). Comparative PD studies enrolling 1-30, 31-60, and 61-90 participants totaled 4, 7, and 4 studies, respectively. Finally, comparative efficacy studies enrolling 1-300, 301-600, and 601-900 participants totaled 6, 10, and 11 studies, respectively. In particular, oncology and rheumatology ranked first and second among disease areas recruiting 601-900 patients. CONCLUSION: Large numbers of patients were enrolled to conduct comparative efficacy studies. Efficient biosimilar development should be considered on the basis of accumulated scientific understanding of the comparable features of biosimilars and their development.


Subject(s)
Biosimilar Pharmaceuticals , Drug Approval , Biosimilar Pharmaceuticals/pharmacokinetics , Biosimilar Pharmaceuticals/therapeutic use , Humans , Sample Size , Japan , Surveys and Questionnaires , Clinical Trials as Topic , Drug Development
11.
Stat Med ; 43(12): 2439-2451, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38594809

ABSTRACT

Enrolling patients in the standard-of-care (SOC) arm of randomized clinical trials, especially for rare diseases, can be very challenging due to limited resources, restricted patient population availability, and ethical considerations. As the therapeutic effect of the SOC is often well documented in historical trials, we propose a Bayesian platform trial design with a hybrid control based on the multisource exchangeability modelling (MEM) framework to harness historical control data. The MEM approach provides a computationally efficient method to formally evaluate the exchangeability of study outcomes between different data sources and allows us to make better-informed data-borrowing decisions based on the exchangeability between historical and concurrent data. We conduct extensive simulation studies to evaluate the proposed hybrid design. We demonstrate that the proposed design leads to a significant sample size reduction for the internal control arm and borrows more information than competing Bayesian approaches when historical and internal data are compatible.


Subject(s)
Bayes Theorem , Computer Simulation , Models, Statistical , Randomized Controlled Trials as Topic , Humans , Randomized Controlled Trials as Topic/methods , Sample Size , Research Design
12.
Stat Med ; 43(12): 2472-2485, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38605556

ABSTRACT

The statistical methodology for model-based dose finding under model uncertainty has attracted increasing attention in recent years. While the underlying principles are simple and easy to understand, developing and implementing an efficient approach for binary responses can be a formidable task in practice. Motivated by the statistical challenges encountered in a phase II dose finding study, we explore several key design and analysis issues related to the hybrid testing-modeling approaches for binary responses. The issues include candidate model selection and specifications, optimal design and efficient sample size allocations, and, notably, the methods for dose-response testing and estimation. Specifically, we consider a class of generalized linear models suited for the candidate set and establish D-optimal designs for these models. Additionally, we propose using permutation-based tests for dose-response testing to avoid asymptotic normality assumptions typically required for contrast-based tests. We perform trial simulations to enhance our understanding of these issues.
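A permutation-based trend test of the kind mentioned above can be sketched in a few lines; the version below uses a Cochran-Armitage-style score statistic with invented dose scores and response rates, not the candidate models or design of the study.

```python
# A minimal sketch of a permutation dose-response trend test for binary
# responses, avoiding the asymptotic normality assumptions of contrast tests.
# Dose scores, response rates, and the permutation count are hypothetical.
import numpy as np

rng = np.random.default_rng(7)

doses = np.array([0, 1, 2, 4])            # dose scores, one per group
n_per_dose = 30
responses = np.concatenate([rng.binomial(1, p, n_per_dose)
                            for p in (0.10, 0.15, 0.25, 0.40)])
dose_labels = np.repeat(doses, n_per_dose)

def trend_stat(y, d):
    # Cochran-Armitage-style statistic: covariance of dose score and response.
    return np.sum(d * (y - y.mean()))

obs = trend_stat(responses, dose_labels)
perm = np.array([trend_stat(responses, rng.permutation(dose_labels))
                 for _ in range(10_000)])
p_value = np.mean(perm >= obs)            # one-sided: increasing trend
print(f"Permutation p-value for an increasing dose-response trend: {p_value:.4f}")
```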


Subject(s)
Computer Simulation , Dose-Response Relationship, Drug , Models, Statistical , Humans , Uncertainty , Linear Models , Clinical Trials, Phase II as Topic/methods , Clinical Trials, Phase II as Topic/statistics & numerical data , Sample Size , Research Design , Data Interpretation, Statistical
13.
J Exp Psychol Gen ; 153(4): 1139-1151, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38587935

ABSTRACT

The calculation of statistical power has been taken up as a simple yet informative tool to assist in designing an experiment, particularly in justifying sample size. A difficulty with using power for this purpose is that the classical power formula does not incorporate sources of uncertainty (e.g., sampling variability) that can impact the computed power value, leading to a false sense of precision and confidence in design choices. We use simulations to demonstrate the consequences of adding two common sources of uncertainty to the calculation of power. Sampling variability in the estimated effect size (Cohen's d) can introduce a large amount of uncertainty (e.g., sometimes producing rather flat distributions) in power and sample-size determination. The addition of random fluctuations in the population effect size can cause values of its estimates to take on a sign opposite the population value, making calculated power values meaningless. These results suggest that calculated power values, or the use of such values to justify sample size, add little to planning a study. As a result, researchers should put little confidence in power-based choices when planning future studies.
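The paper's first source of uncertainty is easy to reproduce: the sketch below propagates the sampling variability of an estimated Cohen's d through a standard power formula, yielding a distribution of power values rather than a single number. The pilot sample size, effect size, and planned n are illustrative assumptions.

```python
# A minimal sketch: a power "distribution" induced by sampling variability in
# an estimated effect size, using a normal-approximation power formula and the
# small-effect approximation SE(d-hat) ~ sqrt(2/n). All inputs are hypothetical.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

d_hat, n_pilot = 0.5, 20   # estimated Cohen's d from a pilot, n per group
n_planned = 64             # n per group in the planned study

def power_two_sample(d, n, alpha=0.05):
    """Normal-approximation power of a two-sided two-sample test."""
    z = norm.ppf(1 - alpha / 2)
    lam = d * np.sqrt(n / 2)
    return norm.sf(z - lam) + norm.cdf(-z - lam)

d_draws = rng.normal(d_hat, np.sqrt(2 / n_pilot), size=100_000)
power_draws = power_two_sample(d_draws, n_planned)
print(f"Power at the point estimate d-hat: {power_two_sample(d_hat, n_planned):.2f}")
print(f"Power 10th-90th percentile: "
      f"{np.percentile(power_draws, 10):.2f}-{np.percentile(power_draws, 90):.2f}")
```

With a small pilot, the resulting percentile interval is typically very wide, which is the "rather flat distribution" phenomenon described in the abstract.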


Subject(s)
Uncertainty , Humans , Sample Size
14.
Biometrics ; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38591365

ABSTRACT

A spatial sampling design determines where sample locations are placed in a study area so that population parameters can be estimated with relatively high precision. If the response variable has spatial trends, spatially balanced or well-spread designs give precise results for commonly used estimators. This article proposes a new method that draws well-spread samples over arbitrary auxiliary spaces and can be used for master sampling applications. All we require is a measure of the distance between population units. Numerical results show that the method generates well-spread samples and compares favorably with existing designs. We provide an example application using several auxiliary variables to estimate total aboveground biomass over a large study area in Eastern Amazonia, Brazil. Multipurpose surveys are also considered, where the totals of aboveground biomass, primary production, and clay content (3 responses) are estimated from a single well-spread sample over the auxiliary space.


Subject(s)
Sample Size , Surveys and Questionnaires
15.
Br J Math Stat Psychol ; 77(2): 289-315, 2024 May.
Article in English | MEDLINE | ID: mdl-38591555

ABSTRACT

Popular statistical software provides the Bayesian information criterion (BIC) for multi-level models or linear mixed models. However, it has been observed that the combination of statistical literature and software documentation has led to discrepancies in the formulas of the BIC and uncertainties as to the proper use of the BIC in selecting a multi-level model with respect to level-specific fixed and random effects. These discrepancies and uncertainties result from different specifications of sample size in the BIC's penalty term for multi-level models. In this study, we derive the BIC's penalty term for level-specific fixed- and random-effect selection in a two-level nested design. In this new version of BIC, called BIC_E1, the penalty term is decomposed into two parts if the random-effect variance-covariance matrix has full rank: (a) a term with the log of the average sample size per cluster and (b) the total number of parameters times the log of the total number of clusters. Furthermore, we derive another new version of BIC, called BIC_E2, for the presence of redundant random effects. We show that the derived formulae, BIC_E1 and BIC_E2, adhere to empirical values via numerical demonstration and that BIC_E (E indicating either E1 or E2) is the best global selection criterion, as it performs at least as well as BIC with the total sample size and BIC with the number of clusters across various multi-level conditions in a simulation study. In addition, the use of BIC_E1 is illustrated with a textbook example dataset.


Subject(s)
Software , Sample Size , Bayes Theorem , Linear Models , Computer Simulation
16.
JAMA Netw Open ; 7(4): e248818, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38687478

ABSTRACT

Importance: For the design of a randomized clinical trial (RCT), estimation of the expected event rate and effect size of an intervention is needed to calculate the sample size. Overestimation may lead to an underpowered trial. Objective: To evaluate the accuracy of published estimates of event rate and effect size in contemporary cardiovascular RCTs. Evidence Review: A systematic search was conducted in MEDLINE for multicenter cardiovascular RCTs associated with MeSH (Medical Subject Headings) terms for cardiovascular diseases published in the New England Journal of Medicine, JAMA, or the Lancet between January 1, 2010, and December 31, 2019. Identified trials underwent abstract review; eligible trials then underwent full review, and those with insufficiently reported data were excluded. Data were extracted from the original publication or the study protocol, and a random-effects model was used for data pooling. This review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses reporting guideline. The primary outcome was the accuracy of event rate and effect size estimation. Accuracy was determined by comparing the observed event rate in the control group and the observed effect size with their hypothesized values. Linear regression was used to determine the association between estimation accuracy and trial characteristics. Findings: Of the 873 RCTs identified, 374 underwent full review and 30 were subsequently excluded, resulting in 344 trials for analysis. The median observed event rate was 9.0% (IQR, 4.3% to 21.4%), significantly lower than the estimated event rate of 11.0% (IQR, 6.0% to 25.0%), with a median deviation of -12.3% (95% CI, -16.4% to -5.6%; P < .001). More than half of the trials (196 [61.1%]) overestimated the expected event rate. Accuracy of event rate estimation was associated with a higher likelihood of refuting the null hypothesis (0.13 [95% CI, 0.01 to 0.25]; P = .03). The median observed effect size in superiority trials was 0.91 (IQR, 0.74 to 0.99), significantly weaker than the estimated effect size of 0.72 (IQR, 0.60 to 0.80), indicating a median overestimation of 23.1% (95% CI, 17.9% to 28.3%). A total of 216 trials (82.1%) overestimated the effect size. Conclusions and Relevance: In this systematic review of contemporary cardiovascular RCTs, event rates of the primary end point and effect sizes of an intervention were frequently overestimated. This overestimation may have contributed to the inability to adequately test the trial hypothesis.
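The consequence of such overestimation can be illustrated with standard two-proportion formulas, as in the hedged sketch below, which sizes a trial under optimistic assumptions and then evaluates power under values loosely echoing the review's observed medians; the inputs are illustrative, not an analysis of the reviewed trials.

```python
# A minimal sketch: size a two-arm trial under assumed control event rate and
# risk ratio, then compute the power actually achieved if the true values are
# less favorable. All numbers are illustrative.
import numpy as np
from scipy.stats import norm

def n_per_arm(p_control, rr, alpha=0.05, power=0.80):
    """Approximate per-arm n for a two-proportion comparison (pooled z-test)."""
    p1, p2 = p_control, p_control * rr
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * np.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * np.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return int(np.ceil(num / (p1 - p2) ** 2))

def power_given_n(p_control, rr, n, alpha=0.05):
    """Approximate power of the pooled z-test with n per arm."""
    p1, p2 = p_control, p_control * rr
    z_a = norm.ppf(1 - alpha / 2)
    p_bar = (p1 + p2) / 2
    lam = (abs(p1 - p2) * np.sqrt(n) - z_a * np.sqrt(2 * p_bar * (1 - p_bar)))
    return norm.cdf(lam / np.sqrt(p1 * (1 - p1) + p2 * (1 - p2)))

n_planned = n_per_arm(p_control=0.11, rr=0.72)                 # design assumptions
actual = power_given_n(p_control=0.09, rr=0.91, n=n_planned)   # observed medians
print(f"Planned n per arm: {n_planned}; power under observed rates: {actual:.2f}")
```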


Subject(s)
Cardiovascular Diseases , Randomized Controlled Trials as Topic , Humans , Randomized Controlled Trials as Topic/standards , Randomized Controlled Trials as Topic/statistics & numerical data , Research Design/standards , Sample Size
17.
Hum Vaccin Immunother ; 20(1): 2340692, 2024 Dec 31.
Article in English | MEDLINE | ID: mdl-38658140

ABSTRACT

The COVID-19 pandemic required the rapid development of COVID-19 vaccines and treatments, necessitating quick yet representative clinical trial enrollment to evaluate these preventive measures. However, misinformation around the COVID-19 pandemic and general concerns about clinical trial participation in the U.S. hindered clinical trial enrollment. This study assessed awareness of, willingness to participate in, and enrollment in COVID-19 vaccine and treatment clinical trials in Texas. A quota sample of 1,089 Texas residents was collected online from June to July 2022. Respondents were asked if they were aware of, willing to participate in, and had enrolled in clinical trials for COVID-19 vaccines or treatments. Overall, 45.8% of respondents reported being aware of clinical trials for COVID-19 treatments or vaccines, but only 21.7% knew how to enroll and only 13.2% had enrolled in a COVID-19 clinical trial. Respondents with bachelor's or graduate degrees were more likely to be aware of clinical trials, more likely to have enrolled in trials, and more willing to participate in treatment trials. Women were less willing to participate and less likely to have enrolled in COVID-19 clinical trials than men. Respondents aged 55 years and older were more willing to participate, but less likely to have enrolled in COVID-19 clinical trials than 18-to-24-year-olds. Common reasons given for not participating in clinical trials included concerns that COVID-19 treatments may not be safe, government distrust, and uncertainty about what clinical trial participation would entail. Substantial progress is needed to build community awareness and increase enrollment in clinical trials.


Subject(s)
COVID-19 Vaccines , Clinical Trials as Topic , Health Knowledge, Attitudes, Practice , Surveys and Questionnaires , COVID-19/prevention & control , Sample Size , Texas , Treatment Refusal/statistics & numerical data , Trust , Patient Safety , Uncertainty , Patient Selection , Humans , Male , Female , Adolescent , Young Adult , Adult , Middle Aged
18.
Vet Med Sci ; 10(3): e1444, 2024 May.
Article in English | MEDLINE | ID: mdl-38581306

ABSTRACT

BACKGROUND: Genome-wide association studies (GWAS) are a useful tool for detecting genetic variations related to diseases or quantitative traits in the veterinary field. For a binary trait, a case/control experiment is designed in GWAS. However, there is limited information on the optimal case/control ratio and sample size in GWAS. OBJECTIVES: This study aimed to assess the effects of case/control ratio and sample size on GWAS using computer simulation under certain assumptions. METHOD: Using the PLINK software, we simulated three different disease scenarios. In scenario 1, we simulated 10 different case/control ratios with an increasing ratio of cases to controls. In scenario 2, we reversed scenario 1, increasing the ratio of controls to cases. In scenarios 1 and 2, the sample size was gradually increased along with the change in case/control ratios. In scenario 3, the total sample size was fixed at 2000 to isolate the effect of case/control ratio on the number of disease-related single nucleotide polymorphisms (SNPs). RESULTS: The number of disease-related SNPs was highest when the case/control ratio was close to 1:1 in scenarios 1 and 2 and did not change with an increase in sample size. Similarly, the number of disease-related SNPs was highest at a 1:1 case/control ratio in scenario 3, whereas unbalanced case/control ratios led to the detection of fewer disease-related SNPs. The estimated average power of SNPs was highest at a 1:1 case/control ratio in all scenarios. CONCLUSIONS: These findings lead to the conclusion that an increase in sample size may enhance the statistical power of GWAS when the number of cases is small and that a 1:1 case/control ratio may be optimal for GWAS. These findings may be valuable not only for the veterinary field but also for human clinical experiments.
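The conclusion that a 1:1 ratio maximizes power at a fixed total sample size can be checked with a simple normal-approximation calculation for a single-SNP allelic test, sketched below; the allele frequencies, total n, and genome-wide alpha are illustrative assumptions rather than the simulation settings used with PLINK.

```python
# A minimal sketch: approximate power of an allelic test for one SNP as the
# case fraction varies at a fixed total sample size. All inputs hypothetical.
import numpy as np
from scipy.stats import norm

def allelic_test_power(n_cases, n_controls, p_case=0.35, p_control=0.25,
                       alpha=5e-8):
    """Normal-approximation power for an allele-frequency difference
    (two alleles per person, genome-wide significance threshold)."""
    m1, m2 = 2 * n_cases, 2 * n_controls  # allele counts per group
    se = np.sqrt(p_case * (1 - p_case) / m1 + p_control * (1 - p_control) / m2)
    z = norm.ppf(1 - alpha / 2)
    lam = (p_case - p_control) / se
    return norm.sf(z - lam) + norm.cdf(-z - lam)

total = 2000
for frac_cases in (0.1, 0.3, 0.5, 0.7, 0.9):
    n_ca = int(total * frac_cases)
    print(f"cases:controls = {n_ca}:{total - n_ca} -> "
          f"power = {allelic_test_power(n_ca, total - n_ca):.3f}")
```

Under these assumed inputs, power peaks at the balanced 0.5 case fraction, matching the abstract's conclusion.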


Subject(s)
Genome-Wide Association Study , Polymorphism, Single Nucleotide , Humans , Animals , Genome-Wide Association Study/veterinary , Genome-Wide Association Study/methods , Computer Simulation , Sample Size , Phenotype
19.
Stat Med ; 43(10): 2007-2042, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38634309

ABSTRACT

Quantile regression, known as a robust alternative to linear regression, has been widely used in statistical modeling and inference. In this paper, we propose a penalized, weighted, convolution-type smoothed method for variable selection and robust parameter estimation in quantile regression with high-dimensional longitudinal data. The proposed method uses a twice-differentiable, smoothed loss function in place of the check function of unpenalized quantile regression and can consistently select the important covariates using efficient gradient-based iterative algorithms when the dimension of the covariates is larger than the sample size. Moreover, the proposed method can circumvent the influence of outliers in the response variable and/or the covariates. To incorporate the correlation within each subject and enhance the accuracy of the parameter estimation, a two-step weighted estimation method is also established. Furthermore, we prove the oracle properties of the proposed method under some regularity conditions. Finally, the performance of the proposed method is demonstrated by simulation studies and two real examples.
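The core ingredient, a convolution-type smoothed check loss, has a closed form under a Gaussian kernel. The sketch below fits a plain, unpenalized smoothed quantile regression by gradient-based optimization and omits the paper's weighting and penalization steps; the bandwidth and data are illustrative assumptions.

```python
# A minimal sketch of convolution-smoothed quantile regression (Gaussian
# kernel, no penalty, no longitudinal weighting). For Gaussian smoothing, the
# smoothed check loss is l_h(u) = u * (tau - Phi(-u/h)) + h * phi(u/h),
# which is twice differentiable, so BFGS applies. Data are simulated.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n, p, tau, h = 200, 3, 0.5, 0.3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta_true = np.array([1.0, 2.0, -1.0, 0.5])
y = X @ beta_true + rng.standard_t(df=3, size=n)   # heavy-tailed errors

def smoothed_loss(beta):
    u = y - X @ beta
    return np.mean(u * (tau - norm.cdf(-u / h)) + h * norm.pdf(u / h))

res = minimize(smoothed_loss, x0=np.zeros(p + 1), method="BFGS")
print("Estimated coefficients:", np.round(res.x, 2))
```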


Subject(s)
Algorithms , Models, Statistical , Humans , Computer Simulation , Linear Models , Sample Size
20.
Stat Med ; 43(10): 1973-1992, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38634314

ABSTRACT

The expected value of the standard power function of a test, computed with respect to a design prior distribution, is often used to evaluate the probability of success of an experiment. However, looking only at the expected value might be reductive. Instead, the whole probability distribution of the power function induced by the design prior can be exploited. In this article we consider one-sided testing for the scale parameter of exponential families and we derive general unifying expressions for cumulative distribution and density functions of the random power. Sample size determination criteria based on alternative summaries of these functions are discussed. The study sheds light on the relevance of the choice of the design prior in order to construct a successful experiment.
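The idea of examining the whole induced distribution of power, not just its expectation, can be sketched by Monte Carlo for a one-sided z-test with a normal design prior, as below; all numerical choices are illustrative assumptions, and the paper's exponential-family expressions are not reproduced.

```python
# A minimal sketch: draw effects from a design prior, compute the power of a
# one-sided z-test at each draw, and summarize the resulting distribution of
# the random power. All design values are hypothetical.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

n, alpha = 50, 0.025
mu_design, sd_design = 0.4, 0.2   # normal design prior on the effect theta

def power_one_sided(theta, n, alpha):
    """Power of the one-sided z-test of H0: theta <= 0 at a given theta."""
    return norm.sf(norm.ppf(1 - alpha) - theta * np.sqrt(n))

theta_draws = rng.normal(mu_design, sd_design, size=200_000)
power_draws = power_one_sided(theta_draws, n, alpha)

print(f"Expected power (probability of success): {power_draws.mean():.2f}")
print(f"Median power:                            {np.median(power_draws):.2f}")
print(f"Pr(power >= 0.80):                       {np.mean(power_draws >= 0.80):.2f}")
```

Summaries such as Pr(power >= 0.80) are examples of the alternative sample size determination criteria the abstract alludes to.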


Subject(s)
Bayes Theorem , Humans , Probability , Sample Size